Christmas Island
Google planning powerful AI data center on tiny Australian outpost
SYDNEY - Google plans to build a large artificial intelligence data center on Australia's remote Indian Ocean outpost of Christmas Island after signing a cloud deal with the Department of Defence earlier this year, according to documents and interviews with officials. Plans for the data center on the tiny island, located 350 kilometers south of Indonesia, have not previously been reported, and many details, including its projected size, cost and potential uses, remain secret. However, military experts say such a facility would be a valuable asset on the island, which defense officials increasingly see as a critical front line for monitoring Chinese submarine and other naval activity in the Indian Ocean.
- Oceania > Australia > Australian Indian Ocean Territories > Christmas Island (0.47)
- Indian Ocean (0.47)
- Asia > Indonesia (0.25)
- (8 more...)
- Information Technology > Cloud Computing (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Social Media (0.79)
The Download: carbon removal's future, and measuring pain using an app
Plus: Meta's lawyers advised staff to remove parts of their research. After years of growth that spawned hundreds of startups, the nascent carbon removal sector appears to be facing a reckoning. Running Tide, a promising aquaculture company, shut down its operations last summer, and a handful of other companies have shuttered, downsized, or pivoted in recent months as well. And the collective industry hasn't made a whole lot more progress toward Running Tide's ambitious plans to sequester a billion tons of carbon dioxide by this year. The hype phase is over and the sector is sliding into the turbulent business trough that follows, experts warn. And the open question is: If the carbon removal sector is heading into a painful if inevitable clearing-out cycle, where will it go from there? This story is part of MIT Technology Review's What's Next series, which looks across industries, trends, and technologies to give you a first look at the future.
- Oceania > Australia > Australian Indian Ocean Territories > Christmas Island (0.05)
- North America > United States > Massachusetts (0.05)
- Asia > South Korea (0.05)
- Asia > China (0.05)
- Law (0.72)
- Information Technology (0.48)
- Health & Medicine > Therapeutic Area (0.31)
- Government > Regional Government (0.30)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.71)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.49)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.30)
Evaluating Large Language Models for IUCN Red List Species Information
Large Language Models (LLMs) are rapidly being adopted in conservation to address the biodiversity crisis, yet their reliability for species evaluation is uncertain. This study systematically validates five leading models on 21,955 species across four core IUCN Red List assessment components: taxonomy, conservation status, distribution, and threats. A critical paradox was revealed: models excelled at taxonomic classification (94.9%) but consistently failed at conservation reasoning (27.2% for status assessment). This knowledge-reasoning gap, evident across all models, suggests inherent architectural constraints, not just data limitations. Furthermore, models exhibited systematic biases favoring charismatic vertebrates, potentially amplifying existing conservation inequities. These findings delineate clear boundaries for responsible LLM deployment: they are powerful tools for information retrieval but require human oversight for judgment-based decisions. A hybrid approach is recommended, where LLMs augment expert capacity while human experts retain sole authority over risk assessment and policy.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.70)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.93)
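The per-component scoring the abstract reports (e.g. 94.9% taxonomy vs 27.2% status) could be computed with a harness like this minimal sketch; the record format, field names, and sample answers are invented for illustration, not the study's actual pipeline.

```python
# Hypothetical scoring harness: compare model answers against ground truth
# and report accuracy separately for each assessment component.
from collections import defaultdict

def component_accuracy(records):
    """records: iterable of (component, model_answer, ground_truth)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for component, answer, truth in records:
        totals[component] += 1
        if answer.strip().lower() == truth.strip().lower():
            hits[component] += 1
    return {c: hits[c] / totals[c] for c in totals}

# Toy records illustrating the knowledge-reasoning gap described above.
sample = [
    ("taxonomy", "Carnivora", "Carnivora"),
    ("taxonomy", "Rodentia", "Rodentia"),
    ("status", "Least Concern", "Endangered"),
    ("status", "Vulnerable", "Vulnerable"),
]
print(component_accuracy(sample))  # taxonomy: 1.0, status: 0.5
```

Splitting accuracy per component, rather than pooling all questions, is what exposes the paradox the abstract describes: aggregate accuracy would hide the gap between retrieval and reasoning tasks.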
Navigation Variable-based Multi-objective Particle Swarm Optimization for UAV Path Planning with Kinematic Constraints
Duong, Thi Thuy Ngan, Bui, Duy-Nam, Phung, Manh Duong
Path planning is essential for unmanned aerial vehicles (UAVs) as it determines the path that the UAV needs to follow to complete a task. This work addresses this problem by introducing a new algorithm called navigation variable-based multi-objective particle swarm optimization (NMOPSO). It first models path planning as an optimization problem via the definition of a set of objective functions that include optimality and safety requirements for UAV operation. The NMOPSO is then used to minimize those functions through Pareto optimal solutions. The algorithm features a new path representation based on navigation variables to include kinematic constraints and exploit the maneuverable characteristics of the UAV. It also includes an adaptive mutation mechanism to enhance the diversity of the swarm for better solutions. Comparisons with various algorithms have been carried out to benchmark the proposed approach. The results indicate that the NMOPSO performs better than not only other particle swarm optimization variants but also other state-of-the-art multi-objective and metaheuristic optimization algorithms. Experiments have also been conducted with real UAVs to confirm the validity of the approach for practical flights. The source code of the algorithm is available at https://github.com/ngandng/NMOPSO.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Asia > South Korea > Ulsan > Ulsan (0.04)
- Oceania > Australia > Australian Indian Ocean Territories > Christmas Island (0.04)
- (3 more...)
- Aerospace & Defense (0.48)
- Information Technology > Robotics & Automation (0.48)
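The mechanism at the heart of the abstract — keeping Pareto-optimal candidate paths while minimizing several objectives — can be sketched generically. This is not the authors' implementation (that is in the linked repository); the dominance test and archive update below are the standard multi-objective building blocks, with toy objective values standing in for path costs.

```python
# Generic multi-objective PSO building blocks (minimization).
def dominates(a, b):
    """True if objective vector a Pareto-dominates b: no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep the non-dominated set: reject a dominated candidate,
    otherwise insert it and drop any members it dominates."""
    if any(dominates(member, candidate) for member in archive):
        return archive
    return [m for m in archive if not dominates(candidate, m)] + [candidate]

# e.g. objectives = (path_length, threat_cost) for candidate UAV paths
archive = []
for cand in [(10.0, 3.0), (8.0, 4.0), (12.0, 2.0), (8.0, 4.5)]:
    archive = update_archive(archive, cand)
print(archive)  # the three mutually non-dominated paths survive
```

In NMOPSO the objective vectors would come from evaluating a navigation-variable path encoding against the optimality and safety functions the abstract mentions; the archive of non-dominated solutions is what "minimize those functions through Pareto optimal solutions" refers to.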
Sunken WWII US destroyer, known as 'Dancing Mouse,' discovered 80 years after battle with Japanese
The wreckage of the USS Edsall, an American warship sunk during a battle with Japanese forces in World War II, has been discovered more than 80 years after it was lost at the bottom of the sea, U.S. and Australian officials announced Monday. The final resting place of the Edsall, a Clemson-class destroyer measuring 314 feet in length and capable of 35 knots, was discovered late last year at the bottom of the Indian Ocean, according to the U.S. Navy and Royal Australian Navy. "Working in collaboration with the U.S. Navy, the Royal Australian Navy used advanced robotic and autonomous systems, normally used for hydrographic survey capabilities, to locate USS Edsall on the sea-bed," Chief of the Royal Australian Navy, Vice Admiral Mark Hammond, said in a statement. The warship was sunk on March 1, 1942, three months after the attack on Pearl Harbor, during an encounter with Japanese battleships and dive bombers.
- North America > United States (1.00)
- Indian Ocean (0.26)
- Oceania > Australia > Australian Indian Ocean Territories > Christmas Island (0.06)
- (2 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Navy (1.00)
MIRAI: Evaluating LLM Agents for Event Forecasting
Ye, Chenchen, Hu, Ziniu, Deng, Yihe, Huang, Zijie, Ma, Mingyu Derek, Zhu, Yanqiao, Wang, Wei
Recent advancements in Large Language Models (LLMs) have empowered LLM agents to autonomously collect world information, over which to conduct reasoning to solve complex problems. Given this capability, increasing interest has been directed toward employing LLM agents for predicting international events, which can influence decision-making and shape policy development on an international scale. Despite such growing interest, there is a lack of a rigorous benchmark of LLM agents' forecasting capability and reliability. To address this gap, we introduce MIRAI, a novel benchmark designed to systematically evaluate LLM agents as temporal forecasters in the context of international events. Our benchmark features an agentic environment with tools for accessing an extensive database of historical, structured events and textual news articles. We refine the GDELT event database with careful cleaning and parsing to curate a series of relational prediction tasks with varying forecasting horizons, assessing LLM agents' abilities from short-term to long-term forecasting. We further implement APIs to enable LLM agents to utilize different tools via a code-based interface. In summary, MIRAI comprehensively evaluates the agents' capabilities in three dimensions: 1) autonomously sourcing and integrating critical information from large global databases; 2) writing code with domain-specific APIs and libraries for tool use; and 3) jointly reasoning over historical knowledge from diverse formats and times to accurately predict future events. Through comprehensive benchmarking, we aim to establish a reliable framework for assessing the capabilities of LLM agents in forecasting international events, thereby contributing to the development of more accurate and trustworthy models for international relations analysis.
- Asia > North Korea (0.14)
- Oceania > Australia > Australian Indian Ocean Territories > Territory of Cocos (Keeling) Islands (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- (234 more...)
- Law (1.00)
- Government > Foreign Policy (1.00)
- Government > Military (0.93)
- Information Technology (0.92)
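As a rough illustration of the "code-based tool interface" and relational prediction tasks described above, here is a toy sketch: a query function the agent could call over a tiny event table, plus a frequency baseline forecaster. All function names, fields, and records are invented; this is not MIRAI's actual API or data.

```python
# Toy stand-in for a GDELT-style event store and an agent-callable tool.
from datetime import date

EVENTS = [  # (day, source_actor, relation, target_actor)
    (date(2023, 1, 5), "A", "consult", "B"),
    (date(2023, 1, 9), "A", "threaten", "B"),
    (date(2023, 2, 1), "A", "consult", "B"),
]

def get_events(source, target, before):
    """Tool: all events between two actors strictly before a cutoff date."""
    return [e for e in EVENTS if e[1] == source and e[3] == target and e[0] < before]

def predict_relation(source, target, before):
    """Trivial baseline: forecast the most frequent past relation, if any."""
    history = [rel for _, _, rel, _ in get_events(source, target, before)]
    return max(set(history), key=history.count) if history else None

print(predict_relation("A", "B", date(2023, 3, 1)))  # "consult"
```

In the benchmark's framing, an LLM agent would write code like the `get_events` call itself, then reason over the retrieved history instead of applying a fixed frequency rule; the point of the sketch is only the shape of the task: query structured history, emit a relation for a future date.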
Unlock the Future of Autonomous Drones with Innovative Secure Runtime Assurance (SRTA)
- Oceania > Australia > Australian Indian Ocean Territories > Territory of Cocos (Keeling) Islands (0.15)
- Asia > China > Hong Kong (0.15)
- Oceania > Samoa (0.07)
- (285 more...)
- Health & Medicine (0.49)
- Consumer Products & Services (0.49)
- Government (0.31)
Digital Divides in Scene Recognition: Uncovering Socioeconomic Biases in Deep Learning Systems
Greene, Michelle R., Josyula, Mariam, Si, Wentao, Hart, Jennifer A.
Computer-based scene understanding has influenced fields ranging from urban planning to autonomous vehicle performance, yet little is known about how well these technologies work across social differences. We investigate the biases of deep convolutional neural networks (dCNNs) in scene classification, using nearly one million images from global and US sources, including user-submitted home photographs and Airbnb listings. We applied statistical models to quantify the impact of socioeconomic indicators such as family income, Human Development Index (HDI), and demographic factors from public data sources (CIA and US Census) on dCNN performance. Our analyses revealed significant socioeconomic bias, where pretrained dCNNs demonstrated lower classification accuracy, lower classification confidence, and a higher tendency to assign labels that could be offensive when applied to homes (e.g., "ruin", "slum"), especially in images from homes with lower socioeconomic status (SES). This trend is consistent across two datasets of international images and within the diverse economic and racial landscapes of the United States. This research contributes to understanding biases in computer vision, emphasizing the need for more inclusive and representative training datasets. By mitigating the bias in the computer vision pipelines, we can ensure fairer and more equitable outcomes for applied computer vision, including home valuation and smart home security systems. There is urgency in addressing these biases, which can significantly impact critical decisions in urban development and resource allocation. Our findings also motivate the development of AI systems that better understand and serve diverse communities, moving towards technology that equitably benefits all sectors of society.
- North America > United States (0.67)
- Oceania > Samoa (0.04)
- Oceania > Pitcairn (0.04)
- (204 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Smart Houses & Appliances (0.54)
- Health & Medicine > Public Health (0.48)
- Banking & Finance > Economy (0.46)
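The group-wise audit the abstract describes, comparing classifier accuracy across socioeconomic strata, can be sketched as follows. The field names, income threshold, and sample rows are illustrative only, not the paper's pipeline.

```python
# Minimal bias audit: split predictions by a socioeconomic indicator
# and compare accuracy per group.
def accuracy_by_group(rows, group_key):
    """rows: dicts with a boolean 'correct' plus grouping fields."""
    groups = {}
    for r in rows:
        g = group_key(r)
        hit, n = groups.get(g, (0, 0))
        groups[g] = (hit + r["correct"], n + 1)
    return {g: hit / n for g, (hit, n) in groups.items()}

# Toy predictions tagged with a (hypothetical) household income.
rows = [
    {"income": 20_000, "correct": False},
    {"income": 25_000, "correct": True},
    {"income": 90_000, "correct": True},
    {"income": 110_000, "correct": True},
]
gap = accuracy_by_group(rows, lambda r: "low" if r["income"] < 50_000 else "high")
print(gap)  # {'low': 0.5, 'high': 1.0}
```

The study's statistical models go further (regressing performance on income, HDI, and demographics), but the per-group accuracy gap shown here is the basic quantity such an audit reports.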
Detecting agreement in multi-party dialogue: evaluating speaker diarisation versus a procedural baseline to enhance user engagement
Addlesee, Angus, Denley, Daniel, Edmondson, Andy, Gunson, Nancie, Garcia, Daniel Hernández, Kha, Alexandre, Lemon, Oliver, Ndubuisi, James, O'Reilly, Neil, Perochaud, Lia, Valeri, Raphaël, Worika, Miebaka
Conversational agents participating in multi-party interactions face significant challenges in dialogue state tracking, since the identity of the speaker carries important contextual meaning. It is common to utilise diarisation models to identify the speaker. However, it is not clear if these are accurate enough to correctly identify specific conversational events such as agreement or disagreement during a real-time interaction. This study uses a cooperative quiz, where the conversational agent acts as quiz-show host, to determine whether diarisation or a frequency-and-proximity-based method is more accurate at determining agreement, and whether this translates to feelings of engagement from the players. Experimental results show that our procedural system was more engaging to players, and was more accurate at detecting agreement, reaching an average accuracy of 0.44 compared to 0.28 for the diarised system.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Oceania > Australia > Australian Indian Ocean Territories > Christmas Island (0.05)
- North America > Antigua and Barbuda (0.05)
- (10 more...)
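The abstract does not spell out the procedural baseline, but a frequency-and-proximity scorer might look like the sketch below: score each candidate answer by how often and how recently it is mentioned, and declare agreement only when one answer clearly leads. The recency weighting and threshold are invented for illustration.

```python
# Hypothetical frequency-and-proximity agreement detector.
def detect_agreement(mentions, num_turns, threshold=1.5):
    """mentions: list of (turn_index, answer); later turns weigh more."""
    scores = {}
    for turn, answer in mentions:
        proximity = (turn + 1) / num_turns  # recency weight in (0, 1]
        scores[answer] = scores.get(answer, 0.0) + 1.0 + proximity
    if not scores:
        return None  # nothing proposed yet
    best = max(scores, key=scores.get)
    rest = [v for k, v in scores.items() if k != best]
    # Agreement only if the leader clearly outscores every rival.
    if not rest or scores[best] >= threshold * max(rest):
        return best
    return None

mentions = [(0, "paris"), (2, "paris"), (3, "london"), (5, "paris")]
print(detect_agreement(mentions, num_turns=6))  # "paris"
```

Unlike a diarisation pipeline, a rule like this needs no speaker identities, only the stream of proposed answers, which may be why the procedural system proved more robust in noisy real-time audio.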
COVID-VTS: Fact Extraction and Verification on Short Video Platforms
Liu, Fuxiao, Yacoob, Yaser, Shrivastava, Abhinav
We introduce a new benchmark, COVID-VTS, for fact-checking multi-modal information involving short-duration videos with COVID-19-focused information from both the real world and machine generation. We propose TwtrDetective, an effective model incorporating cross-media consistency checking to detect token-level malicious tampering in different modalities, and generate explanations. Due to the scarcity of training data, we also develop an efficient and scalable approach to automatically generate misleading video posts by event manipulation or adversarial matching. We investigate several state-of-the-art models and demonstrate the superiority of TwtrDetective.
- Europe > Slovakia (0.14)
- Asia > China (0.05)
- Oceania > Australia > Australian Indian Ocean Territories > Christmas Island (0.04)
- (5 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
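As a toy stand-in for the cross-media consistency idea (not the TwtrDetective model, which operates at the representation level), one can flag claim tokens unsupported by the paired transcript or caption:

```python
# Naive token-level consistency check between a post's claim and its
# accompanying evidence text; a real system would match embeddings,
# not surface tokens.
def inconsistent_tokens(claim, evidence):
    ev = set(evidence.lower().split())
    return [t for t in claim.lower().split() if t not in ev]

claim = "vaccine causes outbreak in london"
evidence = "officials report the vaccine rollout in london is on schedule"
print(inconsistent_tokens(claim, evidence))  # ['causes', 'outbreak']
```

Tokens present in the claim but absent from the evidence are candidates for the "token-level malicious tampering" the abstract targets; the benchmark's manipulated posts are generated by exactly such targeted substitutions.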